Design and Analysis of a Labview and Arduino-Based Automatic Solar Tracking System
A Thesis Presented to the Faculty of the College of Science and Technology, Morehead State University, in Partial Fulfillment of the Requirements for the Degree Master of Science, by Caiwen Ding, April 24, 201
Aerial Manipulation Using a Novel Unmanned Aerial Vehicle Cyber-Physical System
Unmanned Aerial Vehicles (UAVs) are attaining ever-greater maneuverability
and sensing ability, making them a promising teleoperation platform for
intelligent interaction with the environment. This work presents a novel
5-degree-of-freedom (DoF) unmanned aerial vehicle (UAV) cyber-physical system
for aerial manipulation. The UAV body can exert a powerful propulsion force
in the longitudinal direction, decoupling the translational dynamics from the
rotational dynamics on the longitudinal plane. A high-level impedance control
law is proposed to drive the vehicle for trajectory tracking and interaction
with the environment. In addition, a vision-based real-time target
identification and tracking method, integrating a YOLO v3 real-time object
detector with feature tracking and morphological operations, is proposed for
onboard implementation with the support of model compression techniques,
eliminating the latency caused by wireless video transmission and the heavy
computation burden of traditional teleoperation platforms. Comment: Newsletter of IEEE Technical Committee on Cyber-Physical Systems
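The high-level impedance control idea can be sketched for a single translational axis; the mass, gains, reference, and contact model below are illustrative assumptions, not the paper's 5-DoF design:

```python
# Minimal 1-DoF impedance-control sketch (illustrative only). The controller
# makes the vehicle behave like a mass-spring-damper around a reference:
#     M_d * e_dd + D_d * e_d + K_d * e = f_ext
M_d, D_d, K_d = 2.0, 8.0, 20.0   # desired inertia, damping, stiffness (assumed)
m = 2.0                           # vehicle mass along the thrust axis (assumed)

def impedance_force(x, v, x_ref, v_ref, f_ext):
    """Thrust command rendering the desired impedance behavior."""
    e, e_d = x - x_ref, v - v_ref
    # Desired error acceleration from the impedance model
    e_dd = (f_ext - D_d * e_d - K_d * e) / M_d
    # Cancel the external force, then re-shape it through the impedance
    return m * e_dd - f_ext

# Simulate approaching a reference at x = 1 with a wall pushing back at 0.95
dt, x, v = 0.001, 0.0, 0.0
for _ in range(20000):
    f_ext = -1.0 if x > 0.95 else 0.0  # hypothetical constant contact force
    u = impedance_force(x, v, 1.0, 0.0, f_ext)
    a = (u + f_ext) / m
    v += a * dt
    x += v * dt
```

In free flight the vehicle converges to the reference; in contact it settles where the rendered spring force balances the contact force, which is the compliant behavior an impedance law is meant to provide.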
Efficient Traffic State Forecasting using Spatio-Temporal Network Dependencies: A Sparse Graph Neural Network Approach
Traffic state prediction in a transportation network is paramount for
effective traffic operations and management, as well as informed user and
system-level decision-making. However, long-term traffic prediction (beyond 30
minutes into the future) remains challenging in current research. In this work,
we integrate the spatio-temporal dependencies in the transportation network
from network modeling, together with the graph convolutional network (GCN) and
graph attention network (GAT). To further tackle the dramatic computation and
memory costs of the giant model size (i.e., number of weights) resulting from
multiple cascaded layers, we propose sparse training to mitigate the training
cost while preserving prediction accuracy: each layer is trained with a fixed
number of nonzero weights in every iteration. We consider the problem of
long-term traffic speed forecasting using real, large-scale transportation
network data from the California Department of Transportation (Caltrans)
Performance Measurement System (PeMS). Experimental
results show that the proposed GCN-STGT and GAT-STGT models achieve low
prediction errors on short-, mid- and long-term prediction horizons, of 15, 30
and 45 minutes in duration, respectively. Using our sparse training, we can
train from scratch at high sparsity (e.g., up to 90%), equivalent to a 10x
reduction in floating point operations (FLOPs), using the same number of
epochs as dense training, and arrive at a model with very small accuracy loss
compared with the original dense training.
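The fixed-nonzero-count idea behind sparse training can be illustrated on a toy linear model; the dimensions, learning rate, and magnitude-based pruning rule here are assumptions for illustration, not the GCN-STGT/GAT-STGT setup:

```python
import numpy as np

# Toy sketch of sparse training: after each gradient step, keep only a fixed
# number of nonzero weights (here, the k largest in magnitude).
rng = np.random.default_rng(0)
n, d, k = 256, 20, 2              # samples, features, nonzeros kept per step
X = rng.normal(size=(n, d))
true_w = np.zeros(d)
true_w[:k] = [3.0, -2.0]          # ground-truth sparse weights (assumed)
y = X @ true_w

w = rng.normal(size=d) * 0.1
for step in range(500):
    grad = X.T @ (X @ w - y) / n  # dense gradient of the MSE loss
    w -= 0.1 * grad
    # Enforce a fixed number of nonzero weights in every iteration
    keep = np.argsort(np.abs(w))[-k:]
    mask = np.zeros(d, dtype=bool)
    mask[keep] = True
    w[~mask] = 0.0
```

The model stays at the target sparsity throughout training, so both the forward and backward passes only ever touch the retained weights, which is where the FLOPs savings come from.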
Boosting Logical Reasoning in Large Language Models through a New Framework: The Graph of Thought
Recent advancements in large-scale models, such as GPT-4, have showcased
remarkable capabilities in addressing standard queries. However, when facing
complex problems that require multi-step logical reasoning, their accuracy
dramatically decreases. Current research has explored the realm of
\textit{prompt engineering} to bolster the inferential capacities of these
models. Our paper unveils a pioneering prompting technique, dubbed
\textit{Graph of Thoughts (GoT)}. Through testing on a trio of escalating
challenges: the 24-point game, resolution of high-degree polynomial equations,
and derivation of formulas for recursive sequences, our method outperformed
GPT-4, achieving accuracy improvements of , , and for each
respective task. Moreover, when juxtaposed with the state-of-the-art (SOTA)
prompting method, \textit{Tree of Thought (ToT)}, our approach registered an
average accuracy boost of , , and
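The 24-point game used as the first benchmark can be stated concretely with a small brute-force solver; this sketches the task itself, not the GoT prompting method:

```python
from itertools import permutations

# 24-point game: decide whether four numbers can be combined with
# +, -, *, / (each number used exactly once) to reach 24.
def solve_24(nums, target=24.0, eps=1e-6):
    ops = [lambda a, b: a + b,
           lambda a, b: a - b,
           lambda a, b: a * b,
           lambda a, b: a / b if abs(b) > eps else None]

    def search(vals):
        if len(vals) == 1:
            return abs(vals[0] - target) < eps
        # Pick an ordered pair, combine it, and recurse on the shorter list
        for i, j in permutations(range(len(vals)), 2):
            rest = [vals[m] for m in range(len(vals)) if m not in (i, j)]
            for op in ops:
                r = op(vals[i], vals[j])
                if r is not None and search(rest + [r]):
                    return True
        return False

    return search([float(n) for n in nums])
```

For example, [4, 6, 8, 2] is solvable (4 * 8 - 6 - 2), while [1, 1, 1, 1] is not; intermediate fractions such as 8 / (3 - 8/3) for [3, 3, 8, 8] are what make the game a genuinely multi-step reasoning task.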
Spectral-DP: Differentially Private Deep Learning through Spectral Perturbation and Filtering
Differential privacy is a widely accepted measure of privacy in the context
of deep learning algorithms, and achieving it relies on a noisy training
approach known as differentially private stochastic gradient descent (DP-SGD).
DP-SGD requires adding noise directly to every gradient of a dense neural
network, so the privacy guarantee comes at a significant utility cost. In this work,
we present Spectral-DP, a new differentially private learning approach which
combines gradient perturbation in the spectral domain with spectral filtering
to achieve a desired privacy guarantee with a lower noise scale and thus better
utility. We develop differentially private deep learning methods based on
Spectral-DP for architectures that contain both convolution and fully connected
layers. In particular, for fully connected layers, we combine a block-circulant
based spatial restructuring with Spectral-DP to achieve better utility. Through
comprehensive experiments, we study and provide guidelines to implement
Spectral-DP deep learning on benchmark datasets. In comparison with
state-of-the-art DP-SGD based approaches, Spectral-DP is shown to have
uniformly better utility performance in both training from scratch and transfer
learning settings. Comment: Accepted in 2023 IEEE Symposium on Security and Privacy (SP)
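The perturb-then-filter idea in the spectral domain can be sketched on a single gradient vector; the FFT basis, noise scale, and fixed low-pass filtering rule below are illustrative assumptions, not the exact Spectral-DP mechanism or its privacy accounting:

```python
import numpy as np

rng = np.random.default_rng(0)

def spectral_perturb(grad, sigma=1.0, keep_frac=0.25):
    """Perturb a gradient in the spectral domain, then filter.

    Noise is added to every spectral coefficient, but a fixed low-pass
    filter then discards most coefficients -- and with them most of the
    injected noise -- before transforming back.
    """
    n = grad.size
    g_hat = np.fft.fft(grad)                       # to the spectral domain
    g_hat = g_hat + rng.normal(0, sigma, n) + 1j * rng.normal(0, sigma, n)
    # Fixed (data-independent) low-pass filter over the keep_frac lowest
    # frequencies
    freqs = np.abs(np.fft.fftfreq(n))
    cutoff = np.sort(freqs)[int(keep_frac * n) - 1]
    g_hat[freqs > cutoff] = 0.0
    return np.fft.ifft(g_hat).real                 # back to gradient space

grad = np.sin(np.linspace(0, 2 * np.pi, 128))      # smooth toy gradient
noisy = spectral_perturb(grad, sigma=2.0)
```

Because a smooth gradient concentrates its energy in low frequencies, filtering removes roughly `1 - keep_frac` of the injected noise while losing little of the signal, which is the intuition behind obtaining the same privacy guarantee at better utility.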
Towards Zero Memory Footprint Spiking Neural Network Training
Biologically-inspired Spiking Neural Networks (SNNs), processing information
using discrete-time events known as spikes rather than continuous values, have
garnered significant attention due to their hardware-friendly and
energy-efficient characteristics. However, training SNNs requires a
considerably large memory footprint, owing to the additional storage needed
for spikes or events, which leads to a complex structure and dynamic setup. In this
paper, to address the memory constraint in SNN training, we introduce an innovative
framework, characterized by a remarkably low memory footprint. We \textbf{(i)}
design a reversible SNN node that retains a high level of accuracy. Our design
is able to achieve a reduction in memory usage compared
to the current SNN node. We \textbf{(ii)} propose a unique algorithm to
streamline the backpropagation process of our reversible SNN node. This
significantly trims the backward floating point operations (FLOPs),
thereby accelerating the training process in comparison to current reversible
layer backpropagation methods. Using our algorithm, the training time can
be curtailed by relative to existing reversible layer
architectures.
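The activation-recomputation idea behind a reversible node can be sketched with a RevNet-style coupling block; the dense tanh maps below are illustrative stand-ins, not the paper's spiking dynamics:

```python
import numpy as np

# Reversible block: the inputs can be reconstructed exactly from the
# outputs, so no intermediate activations need to be stored for backprop.
rng = np.random.default_rng(0)
W_f = rng.normal(size=(8, 8)) * 0.1   # weights of the two residual maps
W_g = rng.normal(size=(8, 8)) * 0.1   # (hypothetical, for illustration)
F = lambda x: np.tanh(x @ W_f)
G = lambda x: np.tanh(x @ W_g)

def forward(x1, x2):
    y1 = x1 + F(x2)
    y2 = x2 + G(y1)
    return y1, y2

def inverse(y1, y2):
    # Run the coupling in reverse: each input is recovered by subtracting
    # the same residual map evaluated on quantities we already have.
    x2 = y2 - G(y1)
    x1 = y1 - F(x2)
    return x1, x2

x1, x2 = rng.normal(size=8), rng.normal(size=8)
y1, y2 = forward(x1, x2)
r1, r2 = inverse(y1, y2)
```

During the backward pass, activations are recomputed on the fly via `inverse` instead of being cached, trading extra computation for a near-zero activation memory footprint.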